🌻 Saturation, scale and attention

9 Nov 2025

"Reached saturation" mostly means "I've found out about as much on this topic as will fit in any human's head, and my supervisor is happy" doesn't it?
And yes, AIs can in many senses fit more in their heads than humans can. But how many answers do you want?

But just because we can interview 80,000 people, does that mean we should?

What would we do with all that information? There is a competition for attention.

And not just attention in the sense of some narrow spotlight we shine on one thing and then another (out of many), but something more like "engagement": understanding, making and sharing sense, even caring.

I think many of us have come across the phenomenon where an AI produces a perfectly adequate or even compelling narrative analysis of something, but our eyes kind of glaze over and it's hard to care or to follow.

This has nothing to do with the ability of the language model; it's about how its outputs fit into our world. It wouldn't be any different if the analyses were produced by humans, angels, aliens or LLMs.

Of course it's perfectly possible to give an LLM more information about who we are and what we care about, and to ask it to produce results which fit us better; it can do that, up to a point. But we sometimes still struggle to care, perhaps because we didn't get our hands dirty enough writing the report, or don't have enough skin in the game. That is not a limitation of the LLMs, or of the angels or aliens. It's a by-product of the fact that we can now get almost free, mostly adequate, sometimes even astounding results to a completely overwhelming range of questions.

What is the Point of Us. A Sci-Fi Story for Researchers